
    Smooth Transition GARCH Models: a Bayesian Perspective

    This paper proposes a new kind of asymmetric GARCH model in which the conditional variance obeys two different regimes with a smooth transition function. In one formulation, the conditional variance reacts differently to negative and positive shocks, while in a second formulation, small and big shocks have separate effects. The introduction of a threshold allows for a mixed effect. A Bayesian strategy, based on the comparison between posterior and predictive Bayesian residuals, is built for detecting the presence and the shape of non-linearities. The method is applied to the Brussels and Tokyo stock indexes. The attractiveness of an alternative parameterisation of the GARCH model is emphasised as a potential solution to some numerical problems.
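
    As an illustration, a minimal sketch of a smooth-transition GARCH variance equation of the kind described, assuming a logistic transition in the lagged shock; the paper's exact parameterisation may differ:

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical two-regime parameters (omega, alpha, beta) and logistic
        # transition G(e) = 1 / (1 + exp(-gamma * (e - c))) with threshold c.
        w1, a1, b1 = 0.05, 0.02, 0.90   # regime weighted in for negative shocks
        w2, a2, b2 = 0.05, 0.15, 0.80   # regime weighted in for positive shocks
        gamma, c = 5.0, 0.0             # transition speed and threshold

        T = 1000
        h = np.empty(T); e = np.empty(T)
        h[0], e[0] = 0.1, 0.0
        for t in range(1, T):
            G = 1.0 / (1.0 + np.exp(-gamma * (e[t-1] - c)))  # smooth regime weight
            # conditional variance as a smooth mixture of the two GARCH regimes
            h[t] = ((1 - G) * (w1 + a1 * e[t-1]**2 + b1 * h[t-1])
                    + G * (w2 + a2 * e[t-1]**2 + b2 * h[t-1]))
            e[t] = np.sqrt(h[t]) * rng.standard_normal()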

    The Redistributive Aspects of ELIE: a simulation approach

    This paper analyses the problems linked to the implementation of the Equal Labour Income Equalisation (ELIE) scheme proposed by Kolm (2005). It successively studies the influence of uncertainty in the knowledge of individual incomes, the impact of equivalence scales and, finally, the consequences of capital accumulation. While uncertainty does not fundamentally modify the equity properties of ELIE, equivalence scales can have non-trivial consequences depending on the relation between income and fertility. Finally, capital accumulation introduces strong inequalities in taxation. The paper relies on simulations of the income distribution, calibrated on French data, and on the use of taxation indices.
    Keywords: inequality, ELIE, income distribution
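
    A minimal simulation sketch of the basic ELIE transfer, assuming the standard formulation in which each agent pays k times their own wage and receives k times the average wage; the paper's treatment of uncertainty, equivalence scales and capital accumulation is not reproduced:

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical lognormal wage distribution standing in for the
        # French calibration used in the paper.
        wages = rng.lognormal(mean=2.5, sigma=0.6, size=10_000)

        def elie_transfer(w, k):
            """Net ELIE transfer: each agent pays k * w_i and receives
            k * mean(w), so the income from k hours of labour is equalised."""
            return k * (w.mean() - w)

        k = 0.2  # equalisation coefficient (fraction of labour time pooled)
        net_income = wages + elie_transfer(wages, k)

        def gini(x):
            """Gini coefficient via the sorted-values formula."""
            x = np.sort(x)
            n = x.size
            return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

        print(f"Gini before: {gini(wages):.3f}, after ELIE: {gini(net_income):.3f}")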

    Explicit Solutions for the Asymptotically-Optimal Bandwidth in Cross Validation

    Least squares cross-validation (CV) methods are often used for automated bandwidth selection. We show that they share a common structure which has an explicit asymptotic solution. Using the framework of density estimation, we consider unbiased, biased, and smoothed CV methods. We show that, with a Student t(nu) kernel, which includes the Gaussian as a special case, the CV criterion becomes asymptotically equivalent to a simple polynomial. This leads to optimal-bandwidth solutions that dominate the usual CV methods, definitely in terms of simplicity and speed of calculation, but also often in terms of integrated squared error, because of the robustness of our asymptotic solution. We present simulations to illustrate these features and to give practical guidance on the choice of nu.
    Keywords: bandwidth choice; cross validation; nonparametric density estimation; analytical solution
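
    For reference, a sketch of the standard unbiased (least-squares) CV criterion with a Gaussian kernel, the nu-to-infinity special case of the Student t(nu) kernel; the paper's closed-form asymptotic solution is not reproduced here:

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize_scalar

        def lscv(h, x):
            """Least-squares CV criterion for a Gaussian-kernel density estimate."""
            n = x.size
            d = (x[:, None] - x[None, :]) / h
            # integral of fhat^2: the Gaussian kernel convolved with itself is N(0, 2)
            term1 = norm.pdf(d, scale=np.sqrt(2)).sum() / (n**2 * h)
            # leave-one-out expectation term (exclude the diagonal i == j)
            off = norm.pdf(d).sum() - n * norm.pdf(0.0)
            term2 = 2 * off / (n * (n - 1) * h)
            return term1 - term2

        rng = np.random.default_rng(2)
        x = rng.standard_normal(300)
        res = minimize_scalar(lscv, bounds=(0.05, 2.0), args=(x,), method="bounded")
        print(f"LSCV bandwidth: {res.x:.3f}")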

    Human capital, social capital and scientific research in Europe: an application of linear hierarchical models

    The theory of human capital is one way to explain individual decisions to produce scientific research. However, this theory, even though it recognises the importance of time in science, falls short of explaining the existing diversity of scientific output. The present paper introduces the social capital of Bourdieu (1980), Coleman (1988) and Putnam (1995) as a necessary complement to explain the creation of scientific human capital. The paper connects these two concepts by means of a hierarchical econometric model which distinguishes between the individual level (human capital) and the cluster level of departments (social capital). It shows how a collection of variables can be built from a bibliographic database, indicating both individual behaviour, including mobility, and collective characteristics of the department housing individual researchers. The two-level hierarchical model is estimated on fourteen European countries using bibliometric data in the field of economics.
    Keywords: Economics of science; human capital; social capital; hierarchical models; European science
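
    A sketch of a two-level (random-intercept) model of this kind on synthetic data, with researchers nested in departments; the variable names are illustrative, not those of the paper:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)

        # Hypothetical data: researchers (level 1) nested in departments (level 2).
        n_dept, n_per = 30, 20
        dept = np.repeat(np.arange(n_dept), n_per)
        dept_effect = rng.normal(0, 0.5, n_dept)[dept]   # department-level ("social capital")
        experience = rng.uniform(1, 30, n_dept * n_per)  # individual-level ("human capital") proxy
        output = 0.1 * experience + dept_effect + rng.normal(0, 1, n_dept * n_per)

        df = pd.DataFrame({"output": output, "experience": experience, "dept": dept})

        # Random-intercept model: individual covariates plus a department random effect.
        model = smf.mixedlm("output ~ experience", df, groups=df["dept"]).fit()
        print(model.summary())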

    Bayesian Inference in Dynamic Disequilibrium Models: an Application to the Polish Credit Market

    We review Bayesian inference for dynamic latent variable models using the data augmentation principle. We detail the difficulties of simulating dynamic latent variables in a Gibbs sampler. We propose an alternative specification of the dynamic disequilibrium model which leads to a simple simulation procedure and renders Bayesian inference fully operational. Identification issues are discussed. We conduct a specification search, using the posterior deviance criterion of Spiegelhalter, Best, Carlin, and van der Linde (2002), for a disequilibrium model of the Polish credit market.
    Keywords: Latent variables, Disequilibrium models, Bayesian inference, Gibbs sampler, Credit rationing
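
    A sketch of the data-augmentation step for a static disequilibrium model q_t = min(d_t, s_t), drawing the unobserved side of the market from a truncated normal; the paper's dynamic specification and its alternative parameterisation are not reproduced:

        import numpy as np
        from scipy.stats import truncnorm, norm

        rng = np.random.default_rng(4)

        def augment_latents(q, mu_d, mu_s, sd_d, sd_s):
            """One augmentation step for q = min(d, s): decide which side binds,
            then draw the non-binding side from a normal truncated below at q."""
            # posterior probability that demand is the short side (d = q)
            p_d = norm.pdf(q, mu_d, sd_d) * norm.sf(q, mu_s, sd_s)
            p_s = norm.pdf(q, mu_s, sd_s) * norm.sf(q, mu_d, sd_d)
            demand_binds = rng.random(q.size) < p_d / (p_d + p_s)

            a_d = (q - mu_d) / sd_d   # standardised truncation point for demand
            a_s = (q - mu_s) / sd_s   # standardised truncation point for supply
            d = np.where(demand_binds, q,
                         truncnorm.rvs(a_d, np.inf, loc=mu_d, scale=sd_d, random_state=rng))
            s = np.where(demand_binds,
                         truncnorm.rvs(a_s, np.inf, loc=mu_s, scale=sd_s, random_state=rng),
                         q)
            return d, s

        q = rng.normal(0.0, 1.0, size=8)  # hypothetical observed transactions
        d, s = augment_latents(q, mu_d=0.2, mu_s=0.0, sd_d=1.0, sd_s=1.0)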

    Introduction to the Econometrics of Poverty Measures

    The aim of this paper is to provide an overview of the econometric problems linked to the measurement of poverty. Starting from a welfare function, we first define inequality measures and then poverty measures, based either on indices or on the notion of stochastic dominance. We then show how the choice of data definitions (income or consumption spending, equivalence scales, etc.) can modify or invert empirical conclusions when comparing countries or the time evolution of poverty. The statistical treatment of survey data involves instruments that are simple at first (order statistics, density estimation), but that can rapidly become complex when one has to estimate standard deviations, or even more so when one wants to test stochastic dominance. Most statistical analysis concerning inequality and poverty is descriptive; explanatory models, such as dynamic factor models or quantile regressions, are needed for analysing the dynamics of poverty. Finally, serious empirical analysis of poverty requires the use of survey data such as the UK Family Expenditure Survey or the French enquête sur le budget des familles. Access to these data sets for scientific researchers is very unequal, in particular in France.
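
    As an example of the standard poverty indices alluded to above, a short sketch computing the Foster-Greer-Thorbecke family P_alpha on hypothetical incomes; the incomes and poverty line are illustrative, not from the paper:

        import numpy as np

        def fgt(income, z, alpha=0):
            """Foster-Greer-Thorbecke index P_alpha with poverty line z:
            alpha=0 is the headcount ratio, alpha=1 the poverty gap,
            alpha=2 a severity measure."""
            poor = income[income < z]
            gaps = (z - poor) / z
            return np.sum(gaps ** alpha) / income.size

        incomes = np.array([300.0, 450.0, 800.0, 1200.0, 2500.0, 4000.0])
        z = 1000.0  # hypothetical poverty line
        print([round(fgt(incomes, z, a), 3) for a in (0, 1, 2)])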

    Nonlinear Bayesian Modelling of the US Short-Term Interest Rate: the Contribution of Non-Parametric Tools

    This paper investigates empirical models of the US short-term interest rate. It makes use of a combination of classical non-parametric methods and parametric Bayesian methods. In a first step, it investigates the shape of the drift and volatility functions of the discretised model using non-parametric tools. The paper then develops a Bayesian approach to model selection based on the minimisation of the Hellinger distance between the posterior predictive density of a discretised model and a non-parametric estimate of the data density. Discretisations of the continuous-time model are then estimated under various parametric formulations, ranging from constant elasticity of variance to the switching-regime models suggested by the preliminary non-parametric analysis.
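
    A sketch of the model-selection criterion described: the squared Hellinger distance H^2 = 1 - integral of sqrt(f g) between two kernel density estimates, standing in for the posterior predictive density and the data density; the samples below are hypothetical:

        import numpy as np
        from scipy.stats import gaussian_kde
        from scipy.integrate import trapezoid

        def hellinger_sq(draws, data, grid_size=512):
            """Squared Hellinger distance between kernel density estimates
            of two samples, H^2 = 1 - integral of sqrt(f * g)."""
            f, g = gaussian_kde(draws), gaussian_kde(data)
            lo = min(draws.min(), data.min())
            hi = max(draws.max(), data.max())
            x = np.linspace(lo, hi, grid_size)
            return 1.0 - trapezoid(np.sqrt(f(x) * g(x)), x)

        rng = np.random.default_rng(5)
        predictive = rng.normal(0.05, 0.020, 5_000)  # hypothetical predictive draws
        observed = rng.normal(0.05, 0.025, 1_000)    # hypothetical observed rates
        print(f"H^2 = {hellinger_sq(predictive, observed):.4f}")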
